# Lightweight Language Models
## Zeta 2
**Author:** Zeta-LLM · **License:** MIT · **Downloads:** 26 · **Likes:** 3
**Tags:** Large Language Model, Supports Multiple Languages

Zeta 2 is a small language model (SLM) with approximately 460 million parameters, trained entirely on consumer-grade hardware and supporting multiple languages.
## Smollm2 135M Eagle
**Author:** nyuuzyou · **License:** Apache-2.0 · **Downloads:** 50 · **Likes:** 3
**Tags:** Large Language Model, Supports Multiple Languages

A lightweight Russian-English bilingual model fine-tuned from SmolLM2-135M, with improved Russian-language capability but notable limitations.
## Qwen2.5 1.5B Instruct
**Author:** Gensyn · **License:** Apache-2.0 · **Downloads:** 2.1M · **Likes:** 4
**Tags:** Large Language Model, Transformers, English

A 1.5B-parameter instruction-tuned model built for Gensyn RL Swarm, supporting local fine-tuning via peer-to-peer reinforcement learning.
## Llama 3.1 0x Mini
**Author:** ozone-research · **Downloads:** 21 · **Likes:** 5
**Tags:** Large Language Model, Transformers

0x Mini is a lightweight language model developed by Ozone AI, built on the Llama-3.1 architecture and offering efficient text generation.
## Llammlein 1B
**Author:** LSX-UniWue · **License:** Other · **Downloads:** 304 · **Likes:** 14
**Tags:** Large Language Model, Transformers, German

A German Tinyllama 1B language model trained from scratch using the Tinyllama code framework and the RedPajama V2 German corpus.
## Smollm 135M 4bit
**Author:** mlx-community · **License:** Apache-2.0 · **Downloads:** 312 · **Likes:** 1
**Tags:** Large Language Model, Transformers, English

This is a 4-bit quantized 135M-parameter small language model, suitable for text generation tasks in resource-constrained environments.
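The 4-bit quantization mentioned above can be illustrated with a minimal group-wise scheme: each group of weights stores a per-group minimum and scale plus one 0–15 code per weight. This is a sketch of the general technique only, not mlx-community's actual quantization kernel; the group size and helper names are illustrative.

```python
def quantize_4bit(weights, group_size=4):
    """Group-wise 4-bit quantization (illustrative sketch, not the
    real MLX implementation): each group keeps (min, scale, codes)."""
    groups = []
    for i in range(0, len(weights), group_size):
        g = weights[i:i + group_size]
        lo, hi = min(g), max(g)
        scale = (hi - lo) / 15 or 1.0  # avoid div-by-zero for constant groups
        codes = [round((w - lo) / scale) for w in g]  # each code fits in 4 bits
        groups.append((lo, scale, codes))
    return groups

def dequantize_4bit(groups):
    """Reconstruct approximate weights from the grouped codes."""
    return [lo + scale * c for lo, scale, codes in groups for c in codes]

weights = [0.12, -0.5, 0.33, 0.9, -0.1, 0.05, 0.7, -0.25]
restored = dequantize_4bit(quantize_4bit(weights))
# Reconstruction error is bounded by half a quantization step per group.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

With 4 bits per weight plus small per-group metadata, storage drops to roughly a quarter of float16, which is why such models fit resource-constrained devices.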
## Mobillama 1B Chat
**Author:** MBZUAI · **License:** Apache-2.0 · **Downloads:** 44 · **Likes:** 25
**Tags:** Large Language Model, Transformers, English

MobiLlama-1B-Chat is an instruction-following model fine-tuned from MobiLlama-1B, specifically designed for resource-constrained devices and emphasizing efficiency, low memory footprint, and fast response.
## Mobillama 05B
**Author:** MBZUAI · **License:** MIT · **Downloads:** 187 · **Likes:** 41
**Tags:** Large Language Model, Transformers, Supports Multiple Languages

MobiLlama-05B is a small language model (SLM) with 500 million parameters, focused on resource-constrained devices and providing efficient, low-memory text generation.
## Phi Hermes 1.3B
**Author:** teknium · **License:** Other · **Downloads:** 45 · **Likes:** 44
**Tags:** Large Language Model, Transformers, English

A Phi-1.5 model fine-tuned on the Hermes dataset, primarily used for text generation tasks.
## Charllama 35M
**Author:** inkoziev · **License:** Openrail · **Downloads:** 61 · **Likes:** 5
**Tags:** Large Language Model, Transformers, Other

CharLLaMa-35M is a miniature language model based on the LLaMa architecture, featuring character-level tokenization, suitable for experimental scenarios where BPE tokenization underperforms.
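Character-level tokenization, as used by CharLLaMa-35M, replaces a learned BPE vocabulary with one id per distinct character. A minimal sketch (the real model's vocabulary, special tokens, and id assignment will differ):

```python
class CharTokenizer:
    """Minimal character-level tokenizer sketch; id 0 is reserved
    for unknown characters (hypothetical convention, not CharLLaMa's)."""
    def __init__(self, corpus):
        # Vocabulary = every distinct character seen in the corpus.
        self.stoi = {c: i + 1 for i, c in enumerate(sorted(set(corpus)))}
        self.itos = {i: c for c, i in self.stoi.items()}

    def encode(self, text):
        return [self.stoi.get(c, 0) for c in text]

    def decode(self, ids):
        return "".join(self.itos.get(i, "?") for i in ids)

tok = CharTokenizer("съешь ещё этих мягких французских булок")
ids = tok.encode("ещё булок")
roundtrip = tok.decode(ids)
```

The trade-off is longer token sequences in exchange for a tiny vocabulary and no subword-merge artifacts, which is what makes it attractive where BPE underperforms.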
## Koalpaca KoRWKV 1.5B
**Author:** beomi · **License:** Apache-2.0 · **Downloads:** 1,941 · **Likes:** 7
**Tags:** Large Language Model, Transformers, Korean

A Korean language model based on KoRWKV-1.5B and fine-tuned on the KoAlpaca dataset v1.0.
## Xlm Roberta Base Uk
**Author:** ukr-models · **License:** MIT · **Downloads:** 78 · **Likes:** 12
**Tags:** Large Language Model, Transformers, Other

A reduced version of the XLM-RoBERTa model, optimized for Ukrainian with partial English support, with parameters cut from 470 million to 134 million.
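Most of a reduction like 470M to 134M parameters comes from pruning the multilingual embedding matrix down to the tokens the target languages actually use. A sketch of the idea, with made-up tokens and 1-dimensional embeddings for brevity (not the actual ukr-models pruning script):

```python
def prune_embeddings(embeddings, vocab, keep_tokens):
    """Keep only embedding rows for tokens needed by the target
    languages; returns the reduced matrix and the remapped vocab.
    Illustrative sketch of vocabulary pruning, not the real tooling."""
    new_vocab, new_rows = {}, []
    for tok in keep_tokens:
        if tok in vocab:
            new_vocab[tok] = len(new_rows)      # assign a fresh, dense id
            new_rows.append(embeddings[vocab[tok]])
    return new_rows, new_vocab

# Toy multilingual vocabulary: keep Ukrainian and English, drop the rest.
vocab = {"привіт": 0, "hello": 1, "你好": 2, "світ": 3}
emb = [[0.1], [0.2], [0.3], [0.4]]
rows, new_vocab = prune_embeddings(emb, vocab, ["привіт", "hello", "світ"])
```

Because XLM-RoBERTa's embedding table covers 100 languages, dropping unused rows shrinks the model substantially while leaving the transformer layers untouched.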
## Roformer Chinese Char Small
**Author:** junnyu · **Downloads:** 24 · **Likes:** 0
**Tags:** Large Language Model, Chinese

RoFormer is a Chinese Transformer model enhanced with Rotary Position Embedding, suitable for text infilling tasks.
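Rotary Position Embedding (RoPE), the mechanism behind RoFormer, rotates each pair of query/key dimensions by a position-dependent angle, so attention dot products depend only on the *relative* distance between positions. A small pure-Python sketch of the scheme (dimension layout simplified relative to real implementations):

```python
import math

def rope(vec, pos, base=10000.0):
    """Apply rotary position embedding to one vector: the dimension
    pair (2i, 2i+1) is rotated by angle pos / base**(i/d)."""
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos / base ** (i / d)
        x, y = vec[i], vec[i + 1]
        out.append(x * math.cos(theta) - y * math.sin(theta))
        out.append(x * math.sin(theta) + y * math.cos(theta))
    return out

dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# Key property: <rope(q, m), rope(k, n)> depends only on n - m.
q, k = [1.0, 0.0, 0.5, 0.5], [0.2, 0.9, -0.4, 0.1]
d1 = dot(rope(q, 3), rope(k, 5))    # positions 3 and 5, offset 2
d2 = dot(rope(q, 10), rope(k, 12))  # positions 10 and 12, offset 2

# Rotations also preserve each vector's norm.
v = [0.3, -1.2, 0.8, 0.5]
norm_before = dot(v, v)
norm_after = dot(rope(v, 7), rope(v, 7))
```

That relative-position property is what lets RoPE generalize across absolute positions without a learned position table.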
## Minilmv2 L6 H384 Distilled From RoBERTa Large
**Author:** nreimers · **Downloads:** 73 · **Likes:** 6
**Tags:** Large Language Model, Transformers

MiniLMv2 is a lightweight language representation model developed by Microsoft, achieving efficient performance through knowledge distillation techniques.
## Minilmv2 L6 H384 Distilled From BERT Large
**Author:** nreimers · **Downloads:** 14.21k · **Likes:** 1
**Tags:** Large Language Model, Transformers

MiniLMv2 is a lightweight language representation model developed by Microsoft, achieving efficient inference through knowledge distillation techniques, suitable for various natural language processing tasks.
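Knowledge distillation, which both MiniLMv2 entries rely on, trains a small student to match a large teacher. The classic formulation matches temperature-softened output distributions via a KL term (MiniLMv2 itself distills self-attention relations rather than logits, but the matching principle is the same). A sketch of the classic soft-target loss:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution,
    exposing the teacher's 'dark knowledge' about near-miss classes."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions;
    the T*T factor keeps gradient magnitudes comparable across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]
loss_same = distillation_loss(teacher, teacher)           # matching student
loss_diff = distillation_loss([0.1, 2.5, 0.3], teacher)   # mismatched student
```

The loss is zero when the student reproduces the teacher's distribution and grows as the distributions diverge, giving the student a richer training signal than hard labels alone.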